99 research outputs found

    Learning to automatically detect features for mobile robots using second-order Hidden Markov Models

    In this paper, we propose a new method based on Hidden Markov Models to interpret temporal sequences of sensor data from mobile robots and automatically detect features. Hidden Markov Models have long been used in pattern recognition, especially speech recognition. Their main advantage over other methods (such as neural networks) is their ability to model noisy temporal signals of variable length. We show in this paper that this approach is well suited to interpreting temporal sequences of mobile-robot sensor data. We present two distinct experiments and their results: the first in an indoor environment, where a mobile robot learns to detect features like open doors or T-intersections; the second in an outdoor environment, where a different mobile robot has to identify situations like climbing a hill or crossing a rock.
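
    As a rough illustration of the classification scheme, the sketch below (not the authors' code) scores a quantized sensor sequence against one HMM per feature class with the forward algorithm and keeps the best-scoring class; all model parameters, class names and the observation sequence are invented. A second-order HMM can be reduced to this first-order form by treating pairs of states as single states.

```python
import numpy as np

def log_forward(pi, A, B, obs):
    """Log-likelihood of `obs` under an HMM (pi: initial distribution,
    A: transition matrix, B: emission matrix, obs: symbol indices)."""
    alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        # Sum over previous states in log space, then emit the next symbol.
        alpha = np.logaddexp.reduce(alpha[:, None] + np.log(A), axis=0) \
                + np.log(B[:, o])
    return np.logaddexp.reduce(alpha)

# Two toy 2-state models over 3 discrete sensor symbols (placeholders).
models = {
    "open_door":      (np.array([0.6, 0.4]),
                       np.array([[0.7, 0.3], [0.2, 0.8]]),
                       np.array([[0.8, 0.1, 0.1], [0.1, 0.2, 0.7]])),
    "T_intersection": (np.array([0.5, 0.5]),
                       np.array([[0.9, 0.1], [0.3, 0.7]]),
                       np.array([[0.2, 0.6, 0.2], [0.3, 0.3, 0.4]])),
}

obs = [0, 0, 2, 2, 1]  # quantized range readings (invented)
best = max(models, key=lambda k: log_forward(*models[k], obs))
print(best)  # best-scoring feature class for this sequence
```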

    Multiple Sensor Fusion and Classification for Moving Object Detection and Tracking

    The accurate detection and classification of moving objects is a critical aspect of Advanced Driver Assistance Systems (ADAS). We believe that by including object classification from multiple sensor detections as a key component of the object's representation and of the perception process, we can improve the perceived model of the environment. First, we define a composite object representation that includes class information in the core object description. Second, we propose a complete perception fusion architecture based on the evidential framework to solve the Detection and Tracking of Moving Objects (DATMO) problem by integrating the composite representation and uncertainty management. Finally, we integrate our fusion approach into a real-time application inside a vehicle demonstrator from the interactIVe IP European project, which includes three main sensors: radar, lidar and camera. We test our fusion approach using real data from different driving scenarios, focusing on four objects of interest: pedestrian, bike, car and truck.
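
    To make the evidential combination concrete, here is a minimal sketch of Dempster's rule applied to two class mass functions, one per sensor, over the four objects of interest named above. The input masses are invented, and this is generic Dempster-Shafer code, not the paper's fusion architecture.

```python
from itertools import product

FRAME = frozenset({"pedestrian", "bike", "car", "truck"})

def combine(m1, m2):
    """Dempster's rule: conjunctive combination plus conflict renormalization.
    Masses are dicts mapping frozensets (focal elements) to belief mass."""
    out, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            out[inter] = out.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {k: v / (1.0 - conflict) for k, v in out.items()}

# Invented evidence: radar separates small from large objects;
# the camera classifier suggests "car".
m_radar  = {frozenset({"pedestrian", "bike"}): 0.2,
            frozenset({"car", "truck"}): 0.6, FRAME: 0.2}
m_camera = {frozenset({"car"}): 0.7, FRAME: 0.3}
print(combine(m_radar, m_camera))  # mass concentrates on {"car"}
```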

    Object Perception for Intelligent Vehicle Applications: A Multi-Sensor Fusion Approach

    This paper addresses the problem of object perception for intelligent vehicle applications, with the main tasks of detection, tracking and classification of obstacles using multiple sensors (lidar, camera and radar). New algorithms for raw sensor data processing and sensor data fusion are introduced that make the most of the information from all sensors in order to provide more reliable and accurate information about objects in the vehicle environment. The proposed object perception module is implemented and tested on a demonstrator car in real-life traffic, and evaluation results are presented.
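
    As a hedged illustration of the tracking task mentioned above, the sketch below updates one object track with a fused (x, y) detection using a constant-velocity Kalman filter; the 0.1 s cycle time, noise levels and state layout are assumptions, not values from the paper.

```python
import numpy as np

dt = 0.1  # assumed perception cycle time
# State [x, y, vx, vy]; constant-velocity motion, position-only measurements.
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.05 * np.eye(4)   # assumed process noise
R = 0.25 * np.eye(2)   # assumed fused-measurement noise

def kf_step(x, P, z):
    # Predict with the motion model, then correct with the detection z.
    x, P = F @ x, F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for z in ([1.0, 0.5], [1.1, 0.55], [1.2, 0.61]):   # toy detections
    x, P = kf_step(x, P, np.asarray(z))
print(x)  # estimated position and velocity of the tracked object
```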

    Audiovisual data fusion for successive speakers tracking

    In this paper, a method for tracking a human speaker using audio and video data is presented. It is applied to conversation tracking with a robot. Audiovisual data fusion is performed in a two-step process. Detection is carried out independently on each modality: face detection based on skin color in the video data, and sound-source localization based on the time delay of arrival in the audio data. The results of these detection processes are then fused by an adaptation of a Bayesian filter to detect the speaker. The robot is able to detect the face of the talking person and to detect a new speaker in a conversation.
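
    The fusion step can be pictured, in its simplest static form, as precision-weighted averaging of two bearing estimates: one from the time delay of arrival between two microphones, one from face detection. The sketch below assumes this Gaussian special case of the Bayesian filtering used in the paper; the microphone spacing, delay and uncertainties are made up.

```python
import numpy as np

def tdoa_to_bearing(tau, mic_dist, c=343.0):
    """Azimuth (rad) from the time delay of arrival between two microphones."""
    return np.arcsin(np.clip(c * tau / mic_dist, -1.0, 1.0))

def fuse_gaussian(mu1, var1, mu2, var2):
    # Product of two Gaussians: precision-weighted mean and variance.
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

audio_mu = tdoa_to_bearing(tau=2.0e-4, mic_dist=0.3)      # ~13 degrees
video_mu, video_var = np.deg2rad(10.0), np.deg2rad(3.0) ** 2
mu, var = fuse_gaussian(audio_mu, np.deg2rad(8.0) ** 2, video_mu, video_var)
print(np.rad2deg(mu))  # fused speaker azimuth, dominated by the sharper cue
```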

    Fusion at Detection Level for Frontal Object Perception

    Intelligent vehicle perception involves the correct detection and tracking of moving objects. Taking all available information into account at early levels of the perception task can improve the final model of the environment. In this paper, we present an evidential fusion framework to represent and combine evidence from multiple lists of sensor detections. Our fusion framework considers position, shape and appearance information to represent, associate and combine sensor detections. Although our approach operates at the detection level, we propose a general architecture that includes it as part of a whole perception solution. Several experiments were conducted using real data from a vehicle demonstrator equipped with three main sensors: lidar, radar and camera. The results show improvements in reducing false detections and misclassifications of moving objects.
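
    Before evidence about the same physical object can be combined, detections from the different sensor lists must be associated; a minimal sketch of position-based gating is shown below. The detections, the 1.5 m gate and the greedy nearest-neighbour strategy are illustrative assumptions, not the paper's association method, which also uses shape and appearance.

```python
import numpy as np

# Toy (x, y) detections in metres from two sensor lists.
lidar = np.array([[4.0, 1.0], [12.0, -2.0]])
radar = np.array([[4.3, 1.2], [25.0, 0.0]])

GATE = 1.5  # assumed gating distance in metres
pairs, used = [], set()
for i, p in enumerate(lidar):
    d = np.linalg.norm(radar - p, axis=1)   # distance to every radar detection
    j = int(np.argmin(d))
    if d[j] < GATE and j not in used:       # greedily accept matches in gate
        pairs.append((i, j))
        used.add(j)

print(pairs)  # [(0, 0)]: the first lidar and radar detections coincide
```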

    Multi-sensor data fusion for detection and tracking of moving objects from an autonomous vehicle

    Perception is one of the key steps in the operation of an autonomous vehicle, or even of a vehicle providing only driver-assistance functions. The vehicle observes the external world using its sensors and builds an internal model of the environment, which it keeps updating with the latest sensor data. In this setting, perception can be divided into two parts: the first, called SLAM (Simultaneous Localization And Mapping), is concerned with building an online map of the external environment and localizing the host vehicle in this map; the second, called DATMO (Detection And Tracking of Moving Objects), deals with finding moving objects in the environment and tracking them over time.
    Using high-resolution, accurate laser scanners, many researchers have made successful efforts to solve these problems. However, with low-resolution or noisy laser scanners, solving these problems, especially DATMO, remains a challenge, and there are many false alarms, missed detections, or both. In this thesis we propose that by using a vision sensor (mono or stereo) along with a laser sensor, and by developing an effective fusion scheme at an appropriate level, these problems can be greatly reduced. The main contribution of this research is the identification of three fusion levels and the development of fusion techniques for each level within a SLAM- and DATMO-based perception architecture for autonomous vehicles. Depending on the amount of preprocessing required before fusion, we call them low-level, object-detection-level and track-level fusion. At the low level, we propose a grid-based fusion technique: by giving appropriate weights (depending on the sensor properties) to each sensor's grid, a fused grid can be obtained that gives a better view of the external environment. For object-detection-level fusion, the lists of objects detected by each sensor are fused into a list of fused objects, where each fused object carries more information than its constituent detections; we use a Bayesian fusion technique for this level. Track-level fusion requires tracking moving objects for each sensor separately and then fusing the resulting tracks; fusion at this level helps remove false tracks. The second contribution of this research is a fast technique for finding road borders from noisy laser data and then using this border information to remove false moving objects. We have observed that many false moving objects appear near road borders due to sensor noise; if they are not filtered out, they result in many false tracks close to the vehicle, causing the vehicle to brake or to issue warning messages to the driver unnecessarily. The third contribution is the development of a complete perception solution for lidar and stereo-vision sensors and its integration on a real vehicle demonstrator used in a European Union project (INTERSAFE-2). This project is concerned with safety at intersections and aims to reduce injuries and fatal accidents there. In this project we worked in collaboration with Volkswagen, the Technical University of Cluj-Napoca (Romania) and INRIA Paris to provide a complete perception and risk-assessment solution for the demonstrator.
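
    A minimal sketch of the low-level fusion idea described above, assuming each sensor delivers an occupancy grid of per-cell P(occupied) and that the per-sensor weights are applied in log-odds space; the grids and weights are toy values, and the thesis's exact weighting scheme is not reproduced here.

```python
import numpy as np

def logodds(p):
    return np.log(p / (1.0 - p))

def fuse_grids(grids, weights):
    """Weighted log-odds fusion of same-shape occupancy grids."""
    L = sum(w * logodds(g) for g, w in zip(grids, weights))
    return 1.0 / (1.0 + np.exp(-L))   # back to probabilities

# Invented 2x2 occupancy grids; 0.5 means "unknown".
lidar_grid  = np.array([[0.9, 0.5], [0.2, 0.5]])
stereo_grid = np.array([[0.7, 0.6], [0.5, 0.4]])
fused = fuse_grids([lidar_grid, stereo_grid], weights=[0.7, 0.3])
print(fused.round(2))  # lidar dominates where stereo is uninformative
```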

    Laser-based detection and tracking moving objects using data-driven Markov chain Monte Carlo

    We present a method for the simultaneous detection and tracking of moving objects from a moving vehicle equipped with a single-layer laser scanner. A model-based approach is introduced that interprets the laser measurement sequence in terms of hypotheses of moving-object trajectories over a sliding window of time. Knowledge of various aspects, including the object model, measurement model and motion model, is integrated in one theoretically sound Bayesian framework. The data-driven Markov chain Monte Carlo (DDMCMC) technique is used to sample the solution space effectively and find the optimal solution. Experiments on real-life urban traffic data show promising results.
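
    The flavour of the sampling can be conveyed by a toy Metropolis-Hastings loop over moving/static labels for laser segments, sketched below. The stand-in posterior, the single-flip proposal and the data are assumptions; the paper's DDMCMC additionally biases its proposals with data-driven detection heuristics.

```python
import random, math

# Invented per-segment "motion evidence" in (0, 1).
segments = [0.9, 0.1, 0.8, 0.2, 0.7]

def log_posterior(labels):
    # Higher score when the moving/static labels agree with the evidence.
    return sum(math.log(e if l else 1.0 - e) for l, e in zip(labels, segments))

random.seed(0)
labels = [False] * len(segments)
best = (log_posterior(labels), labels[:])
for _ in range(2000):
    i = random.randrange(len(labels))
    prop = labels[:]
    prop[i] = not prop[i]                      # propose flipping one label
    delta = log_posterior(prop) - log_posterior(labels)
    if delta > 0 or random.random() < math.exp(delta):
        labels = prop                          # Metropolis acceptance
        best = max(best, (log_posterior(labels), labels[:]))
print(best[1])  # most plausible moving/static labelling found
```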

    Fusion Framework for Moving-Object Classification

    Perceiving the environment is a fundamental task for Advanced Driver Assistance Systems. While simultaneous localization and mapping represents the static part of the environment, detection and tracking of moving objects aims at identifying the dynamic part. Knowing the class of the moving objects surrounding the vehicle is very useful information for correctly reasoning, deciding and acting according to each class of object, e.g. car, truck, pedestrian, bike, etc. Active and passive sensors provide useful information for classifying certain kinds of objects, but perform poorly for others. In this paper we present a generic fusion framework based on Dempster-Shafer theory to represent and combine evidence from several sources. We apply the proposed method to the problem of moving-object classification. The method combines information from several lists of moving objects provided by different sensor-based object detectors. The fusion approach accounts for uncertainty arising from the reliability of the sensors and their precision in classifying specific types of objects. The proposed approach takes the instantaneous information at the current time into account and combines it with fused information from previous times. Several experiments were conducted in highway and urban scenarios using a vehicle demonstrator from the interactIVe European project. The results show improvements in the combined classification compared with the individual class hypotheses from the individual detector modules.
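
    One ingredient mentioned above, the handling of sensor reliability, is classically done by discounting a source's mass function before combination. The sketch below shows standard Shafer discounting; the frame, the camera mass function and the reliability value are assumptions, not the paper's figures.

```python
FRAME = frozenset({"car", "truck", "pedestrian", "bike"})

def discount(m, r):
    """Shafer discounting: keep fraction r of each focal mass and transfer
    the remainder to ignorance (the whole frame)."""
    out = {k: r * v for k, v in m.items() if k != FRAME}
    out[FRAME] = 1.0 - sum(out.values())
    return out

# An invented camera classification, then discounted by reliability 0.6.
m_camera = {frozenset({"car"}): 0.8, FRAME: 0.2}
print(discount(m_camera, r=0.6))   # {"car"}: 0.48, ignorance: 0.52
```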

    An Evidential Filter for Indoor Navigation of a Mobile Robot in Dynamic Environment

    Robots are destined to live with humans and perform tasks for them. To do so, an adapted representation of the world, including human detection, is required. Evidential grids enable the robot to handle partial information and ignorance, which can be useful in various situations. This paper deals with an audiovisual perception scheme for a robot in an indoor environment (apartment, house, etc.). As the robot moves, it must take into account its environment and the humans present. This article presents the key stages of the multimodal fusion: an evidential grid is built from each modality using a modified Dempster combination, and a temporal fusion is performed using an evidential filter based on an adapted version of the generalized Bayesian theorem. This enables the robot to keep track of the state of its environment. A decision can then be made on the robot's next move, depending on its mission and the extracted information. The system is tested in a simulated environment under realistic conditions.
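
    As a per-cell illustration of the temporal step, the sketch below combines a cell's previous mass over {Free, Occupied} with a new observation and reports the conflict mass, which can flag a dynamic cell. The masses are invented, and plain Dempster combination stands in for the paper's adapted generalized Bayesian theorem.

```python
from itertools import product

F, O = frozenset({"F"}), frozenset({"O"})   # Free, Occupied
FO = F | O                                  # ignorance

def combine_cell(m1, m2):
    """Dempster combination for one grid cell; also returns the conflict."""
    out, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        if a & b:
            out[a & b] = out.get(a & b, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {k: v / (1.0 - conflict) for k, v in out.items()}, conflict

prev = {F: 0.7, FO: 0.3}   # cell previously believed free
obs  = {O: 0.6, FO: 0.4}   # new sensing says occupied
fused, conflict = combine_cell(prev, obs)
print(fused, conflict)     # high conflict hints at a moving human in the cell
```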